In this paper, we present Circular Accessible Depth (CAD), a robust traversability representation that allows an unmanned ground vehicle (UGV) to learn traversability in various scenarios containing irregular obstacles. To predict CAD, we propose a neural network, CADNet, with an attention-based multi-frame point cloud fusion module, the Stability-Attention Module (SAM), to encode spatial features from point clouds captured by LiDAR. CAD is designed in the polar coordinate system and focuses on predicting the border of the traversable area. Because CAD encodes the spatial information of the surrounding environment, it enables semi-supervised learning of CADNet and thus avoids annotating a large amount of data. Extensive experiments demonstrate that CAD outperforms baselines in terms of robustness and precision. We also deploy our method on a real UGV and show that it performs well in real-world scenarios.
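To make the representation concrete, here is a minimal sketch of a CAD-style label in the polar coordinate system: for each angular sector around the vehicle, it records the range to the first non-traversable point. The height-threshold obstacle test and all names are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: polar "accessible depth" per angular sector from a LiDAR scan.
import numpy as np

def circular_accessible_depth(points, num_sectors=360, max_range=20.0,
                              height_thresh=0.3):
    """points: (N, 3) LiDAR points in the vehicle frame (x, y, z)."""
    ranges = np.hypot(points[:, 0], points[:, 1])
    angles = np.arctan2(points[:, 1], points[:, 0])            # [-pi, pi)
    sectors = ((angles + np.pi) / (2 * np.pi) * num_sectors).astype(int)
    sectors = np.clip(sectors, 0, num_sectors - 1)

    cad = np.full(num_sectors, max_range, dtype=np.float32)
    obstacle = points[:, 2] > height_thresh                    # crude traversability test
    for s, r, o in zip(sectors, ranges, obstacle):
        if o and r < cad[s]:
            cad[s] = r                                         # border of the accessible area
    return cad
```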
Advanced visual localization techniques, such as hierarchical localization, encompass image retrieval and 6 Degree-of-Freedom (DoF) camera pose estimation, and thus must extract both global and local features from input images. Previous methods have achieved this through resource-intensive or accuracy-reducing means, such as combinatorial pipelines or multi-task distillation. In this study, we present SuperGF, a novel method that effectively unifies local and global features for visual localization, achieving a better trade-off between localization accuracy and computational efficiency. Specifically, SuperGF is a transformer-based aggregation model that operates directly on image-matching-specific local features and generates global features for retrieval. We evaluate our method in terms of both accuracy and efficiency, demonstrating its advantages over other methods. We also provide implementations of SuperGF using various types of local features, including dense and sparse learning-based or hand-crafted descriptors.
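The aggregation idea can be sketched as a transformer encoder that pools a set of local descriptors into a single retrieval vector via a learnable query token. The token-pooling design and all dimensions below are assumptions, not SuperGF's actual architecture.

```python
# Schematic sketch: transformer pooling of local descriptors into a global one.
import torch
import torch.nn as nn

class GlobalAggregator(nn.Module):
    def __init__(self, dim=256, heads=8, layers=4, out_dim=1024):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))        # learnable query token
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.proj = nn.Linear(dim, out_dim)

    def forward(self, local_desc):                 # (B, N, dim) local features
        cls = self.cls.expand(local_desc.size(0), -1, -1)
        tokens = self.encoder(torch.cat([cls, local_desc], dim=1))
        global_desc = self.proj(tokens[:, 0])      # pooled query-token output
        return nn.functional.normalize(global_desc, dim=-1)

# e.g. aggregate 1024 SuperPoint-style descriptors into one retrieval vector:
# g = GlobalAggregator()(torch.randn(2, 1024, 256))   # -> (2, 1024)
```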
Universal domain adaptation (UniDA) aims to transfer the knowledge of common classes from a source domain to a target domain without any prior knowledge of the label sets, which requires distinguishing unknown samples from known samples in the target domain. As in the traditional unsupervised domain adaptation problem, misalignment between the two domains also exists, caused by biased and less discriminative embeddings. Recent methods propose to address the domain misalignment by clustering target samples with their nearest neighbors or with prototypes. However, doing so is risky, because we have no prior knowledge about the distribution of the unknown samples, which can amplify the misalignment, especially when the unknown set is large. Meanwhile, other existing classifier-based methods easily produce overconfident predictions on unknown samples, because the supervised objective in the source domain biases the whole model toward the common classes in the target domain. Therefore, we propose a novel non-parametric unknown sample detection method that maps samples from the original feature space into a reliable linear subspace, making the data points sparser so as to reduce the misalignment between unknown samples and source samples. In addition, unlike recent methods that introduce extra parameters to improve the classification of unknown samples, this paper balances the confidence values of known and unknown samples through an unknown-adaptive margin loss, which controls the gradient updates of the classifier on supervised source samples according to the confidence of the unknown samples detected at the current step. Finally, experiments on four public datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.
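The unknown-adaptive margin can be illustrated with a hedged PyTorch sketch in which the margin applied to the source classification loss grows with the detector's confidence that the current target batch is unknown; the paper's exact formulation may differ.

```python
# Hedged sketch of an unknown-adaptive margin cross-entropy on source samples.
import torch
import torch.nn.functional as F

def unknown_adaptive_margin_loss(src_logits, src_labels, unknown_conf,
                                 base_margin=0.5):
    """src_logits: (B, C); src_labels: (B,); unknown_conf: scalar in [0, 1]
    from the non-parametric unknown detector at the current step."""
    margin = base_margin * unknown_conf        # larger margin when unknowns dominate
    logits = src_logits.clone()
    rows = torch.arange(src_logits.size(0))
    logits[rows, src_labels] -= margin         # subtract margin on the true class
    return F.cross_entropy(logits, src_labels)
```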
Convolutional neural networks (CNNs) have achieved state-of-the-art performance in medical image segmentation but require a large amount of manual annotation for training. Semi-supervised learning (SSL) methods promise to reduce the annotation requirement, but their performance is still limited when the dataset size and the number of annotated images are small. Leveraging existing annotated datasets with similar anatomical structures to assist training has the potential to improve the model's performance. However, the transfer is further challenged by the cross-anatomy domain shift caused by the different appearance and even imaging modalities of the target structures. To address this problem, we propose Contrastive Semi-supervised learning for Cross Anatomy Domain Adaptation (CS-CADA), which adapts a model to segment similar structures in a target domain, requiring only limited annotations in the target domain by leveraging a set of existing annotated images of similar structures in a source domain. We use Domain-Specific Batch Normalization (DSBN) to individually normalize the feature maps of the two anatomical domains, and propose a cross-domain contrastive learning strategy to encourage the extraction of domain-invariant features. They are integrated into a Self-Ensembling Mean Teacher (SE-MT) framework to exploit unlabeled target-domain images through a prediction consistency constraint. Extensive experiments show that CS-CADA is able to solve the challenging cross-anatomy domain shift problem, accurately segmenting coronary arteries in X-ray images with the help of retinal vessel images, and cardiac MR images with the help of fundus images, given only a small number of annotations in the target domain.
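A minimal sketch of Domain-Specific Batch Normalization follows, assuming the standard formulation of one BatchNorm layer per domain selected by a domain index, so each anatomical domain keeps its own statistics and affine parameters.

```python
# Minimal sketch of Domain-Specific Batch Normalization (DSBN).
import torch
import torch.nn as nn

class DSBN2d(nn.Module):
    def __init__(self, channels, num_domains=2):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(channels)
                                 for _ in range(num_domains))

    def forward(self, x, domain):          # x: (B, C, H, W); domain: int index
        return self.bns[domain](x)

# Usage: share all convolutional weights, but route each domain to its own BN:
#   dsbn = DSBN2d(64)
#   out_src = dsbn(x_src, 0)   # source anatomy (e.g. retinal vessels)
#   out_tgt = dsbn(x_tgt, 1)   # target anatomy (e.g. coronary arteries)
```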
Universal domain adaptation (UniDA) aims to transfer the knowledge of common classes from a source domain to a target domain without any prior knowledge of the label sets, which requires distinguishing unknown samples from known samples in the target domain. Recent methods prefer to increase the inter-sample affinity within known classes, while ignoring the inter-sample affinity between unknown samples and known samples. This paper shows that exploiting such inter-sample affinity can significantly improve UniDA performance, and proposes a knowability-based UniDA framework built on it. First, we estimate the knowability of each target sample by searching for its neighboring samples in the source domain. Then, we propose an auto-thresholding scheme applied to the estimated knowability to determine whether a target sample is unknown or known. Next, in addition to increasing the inter-sample affinity within each known class as previous methods do, we design a new loss based on the estimated knowability to reduce the inter-sample affinity between unknown and known target samples. Finally, experiments on four public datasets demonstrate that our method significantly outperforms existing state-of-the-art methods.
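The knowability estimate can be sketched as a cosine k-NN score against the source features, with a crude one-dimensional two-means step standing in for the paper's auto-thresholding scheme; both choices are assumptions.

```python
# Illustrative sketch: neighbor-based knowability scores and an auto-threshold.
import torch

def knowability(target_feats, source_feats, k=10):
    """target_feats: (Nt, D); source_feats: (Ns, D); returns (Nt,) scores."""
    t = torch.nn.functional.normalize(target_feats, dim=1)
    s = torch.nn.functional.normalize(source_feats, dim=1)
    sim = t @ s.t()                          # cosine similarities (Nt, Ns)
    topk = sim.topk(k, dim=1).values         # k nearest source neighbors
    return topk.mean(dim=1)                  # higher score -> more "known"

def auto_threshold(scores, iters=10):
    """Split scores at the midpoint of the two cluster means (1-D 2-means)."""
    thr = scores.mean()
    for _ in range(iters):
        lo, hi = scores[scores < thr], scores[scores >= thr]
        if len(lo) == 0 or len(hi) == 0:
            break
        thr = (lo.mean() + hi.mean()) / 2
    return thr
```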
With the increasing need for multi-robot exploration of unknown regions in challenging environments, efficient collaborative exploration strategies are required to achieve such feats. Frontier-based Rapidly-exploring Random Tree (RRT) exploration can be deployed to explore an unknown environment. However, its greedy behavior causes multiple robots to explore the region with the highest revenue, leading to massive overlap during exploration. To address this issue, we propose a temporal-memory-based RRT (TM-RRT) exploration strategy for multiple robots to perform robust exploration in unknown environments. It computes an adaptive duration for each assigned frontier based on each robot's relative position, and computes the revenue of each frontier. In addition, each robot maintains a memory of assigned frontiers that is shared across the fleet to prevent repeated assignment of the same frontier. Through both simulations and real-world deployment, we demonstrate the robustness of the TM-RRT exploration strategy by completing the exploration of a 25.0 m x 54.0 m (1350.0 m²) area, where the conventional RRT exploration strategy falls short.
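A toy sketch of the bookkeeping this implies: each frontier assignment receives a time budget proportional to the robot-frontier distance, and a fleet-shared memory blocks re-assignment of the same frontier; the constants and the revenue formula are illustrative assumptions.

```python
# Toy sketch of TM-RRT-style frontier assignment with a fleet-shared memory.
import math
import time

class FleetMemory:
    def __init__(self):
        self.assigned = {}                        # frontier id -> expiry time

    def try_assign(self, fid, robot_xy, frontier_xy, speed=0.5, slack=1.5):
        now = time.time()
        if self.assigned.get(fid, 0.0) > now:     # frontier still held by a robot
            return False
        dist = math.dist(robot_xy, frontier_xy)
        duration = slack * dist / speed           # adaptive duration from distance
        self.assigned[fid] = now + duration
        return True

def revenue(info_gain, dist, weight=1.0):
    return info_gain - weight * dist              # greedy frontier score
```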
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose MGTAB, a Multi-Relational Graph-Based Twitter Account Detection Benchmark and the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments show that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and suggest potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
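The feature-selection step can be sketched with scikit-learn's mutual-information estimator as a stand-in for the benchmark's exact information-gain computation; the function and parameter names below are illustrative.

```python
# Hedged sketch: keep the 20 user property features with the highest
# information gain (approximated here by mutual information) w.r.t. the labels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_k_features(X, y, k=20):
    """X: (num_users, num_features) property matrix; y: (num_users,) labels."""
    gain = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(gain)[::-1][:k]        # indices of the k most informative
    return X[:, keep], keep
```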
The interview is regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice mock interviews with each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like real interviewers. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which learns from online interview data and provides mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained on ungrounded dialogs and on resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
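The parameter-disentangling idea can be sketched as a frozen pre-trained generator plus a small trainable knowledge selector; the module shapes and interfaces below are illustrative assumptions, not EZInterviewer's actual code.

```python
# Schematic sketch: only the small selector trains on the scarce interview
# dialogs; the generator is pre-trained on ungrounded dialogs and frozen.
import torch
import torch.nn as nn

class MockInterviewer(nn.Module):
    def __init__(self, generator, hidden=256):
        super().__init__()
        self.generator = generator                 # any pre-trained callable; frozen
        for p in self.generator.parameters():
            p.requires_grad = False
        self.selector = nn.Sequential(             # few parameters; trained on the
            nn.Linear(hidden * 2, hidden),         # small set of interview dialogs
            nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, resume_vecs, dialog_state):  # (N, H) snippets, (H,) state
        scores = self.selector(torch.cat(
            [resume_vecs, dialog_state.expand_as(resume_vecs)], dim=-1))
        knowledge = resume_vecs[scores.squeeze(-1).argmax()]  # best resume snippet
        return self.generator(knowledge, dialog_state)
```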
Dynamic treatment regimes assign personalized treatments to patients sequentially over time based on their baseline information and time-varying covariates. In mobile health applications, these covariates are typically collected at different frequencies over a long time horizon. In this paper, we propose a deep spectral Q-learning algorithm, which integrates principal component analysis (PCA) with deep Q-learning to handle the mixed frequency data. In theory, we prove that the mean return under the estimated optimal policy converges to that under the optimal one and establish its rate of convergence. The usefulness of our proposal is further illustrated via simulations and an application to a diabetes dataset.
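A compact sketch of the idea, under assumed dimensions: PCA compresses the high-frequency covariates, and the principal scores are concatenated with the low-frequency state before a standard Q-network; the single-hidden-layer net is an illustrative stand-in.

```python
# Sketch: PCA-reduced mixed-frequency state feeding a Q-network.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class SpectralQNet(nn.Module):
    def __init__(self, low_dim, n_components, n_actions, hidden=64):
        super().__init__()
        self.q = nn.Sequential(
            nn.Linear(low_dim + n_components, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, low_freq, pca_scores):
        return self.q(torch.cat([low_freq, pca_scores], dim=-1))

# Fit PCA on the stacked high-frequency measurements, then estimate Q-values
# on the reduced state:
high_freq = np.random.randn(500, 96)            # e.g. 96 readings per decision point
pca = PCA(n_components=5).fit(high_freq)
scores = torch.tensor(pca.transform(high_freq), dtype=torch.float32)
qnet = SpectralQNet(low_dim=4, n_components=5, n_actions=2)
qvals = qnet(torch.randn(500, 4), scores)       # (500, 2) action values
```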
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first use a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also introduce bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism is also able to supplement the absent consecutive visual semantics of the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
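The soft-label idea can be sketched as Gaussian-weighted supervision over the sampled frames around each annotated boundary timestamp; the kernel and its width are assumptions, not SSRN's exact scheme.

```python
# Illustrative sketch: soft boundary labels over sparsely sampled frames.
import numpy as np

def soft_boundary_labels(frame_times, boundary_time, sigma=0.5):
    """frame_times: (N,) timestamps of the sampled frames, in seconds."""
    w = np.exp(-0.5 * ((frame_times - boundary_time) / sigma) ** 2)
    return w / w.sum()                    # soft label distribution over frames

# Frames sampled at 1 fps, true start at 12.4 s: the label mass is shared
# mostly by the frames at 12 s and 13 s instead of a hard one-hot boundary.
labels = soft_boundary_labels(np.arange(0, 30, 1.0), 12.4)
```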